Recent developments in explainable artificial intelligence promise the potential to transform human-robot interaction: explanations of robot decisions could affect user perceptions, justify their reliability, and increase trust. However, the effects on the perception of robots that explain their decisions have not been studied thoroughly. To analyze the effect of explainable robots, we conducted a study in which two simulated robots play a competitive board game. While one robot explains its moves, the other robot only announces them. Providing explanations for its actions was not sufficient to change the perceived competence, intelligence, likeability, or safety ratings of the robot. However, the results show that the robot that explains its moves is perceived as more lively and human-like. This study demonstrates the need for and potential of explainable human-robot interaction, as well as a broader assessment of its effects as a new research direction.
While current interactive video object segmentation (iVOS) methods rely on scribble-based interactions to generate precise object masks, we propose a click-based interactive video object segmentation (CiVOS) framework to simplify the required user workload as far as possible. CiVOS builds on decoupled modules reflecting user interaction and mask propagation. The interaction module converts click-based interactions into an object mask, which is then inferred for the remaining frames by the propagation module. Additional user interactions allow for refinement of the object mask. The approach is extensively evaluated on the popular interactive DAVIS dataset, with an unavoidable adaptation of the scribble-based interactions to click-based counterparts. We consider several strategies for generating clicks during the evaluation to reflect various user inputs, and adjust the DAVIS performance metric to perform a hardware-independent comparison. The proposed CiVOS pipeline achieves competitive results, despite requiring a lower user workload.
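One simple click-generation strategy of the kind such an evaluation might use is to place a click at the center of mass of the ground-truth object mask. The sketch below is a hypothetical illustration of that idea, not the paper's actual protocol; the function name and mask representation (nested lists of 0/1) are assumptions.

```python
def center_click(mask):
    """Return a (row, col) click at the center of mass of a binary
    object mask; a minimal stand-in for a click-generation strategy."""
    ys, xs = [], []
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            if v:
                ys.append(y)
                xs.append(x)
    if not xs:
        return None  # empty mask: no click can be generated
    return (round(sum(ys) / len(ys)), round(sum(xs) / len(xs)))

# Toy 5x5 mask with a 3x3 object centered at (2, 2).
mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
click = center_click(mask)
```

In practice, a robust strategy would also handle concave objects whose center of mass lies outside the mask, e.g. by snapping to the nearest foreground pixel.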
Most machine learning algorithms are configured by one or more hyperparameters that must be carefully chosen and often considerably impact performance. To avoid a time-consuming and irreproducible manual trial-and-error process for finding well-performing hyperparameter configurations, various automatic hyperparameter optimization (HPO) methods can be employed, e.g., based on resampling error estimation for supervised machine learning. After introducing HPO from a general perspective, this paper reviews important HPO methods such as grid or random search, evolutionary algorithms, Bayesian optimization, Hyperband, and racing. It gives practical recommendations regarding important choices to be made when conducting HPO, including the HPO algorithms themselves, performance evaluation, how to combine HPO with ML pipelines, runtime improvements, and parallelization. This work is accompanied by an appendix that contains information on specific software packages in R and Python, as well as information on and recommended hyperparameter search spaces for specific learning algorithms. We also provide notebooks that demonstrate concepts from this work as supplementary files.
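Of the methods the review covers, random search is the simplest to sketch: sample configurations from the search space, evaluate each, and keep the best. The snippet below is a minimal illustration only; the quadratic `validation_error` stands in for a real resampled error estimate, and the log-uniform ranges are assumed, not taken from the paper.

```python
import random

def validation_error(lr, reg):
    # Hypothetical stand-in for a cross-validated model error;
    # minimized at lr=0.1, reg=0.01.
    return (lr - 0.1) ** 2 + (reg - 0.01) ** 2

def random_search(n_trials, seed=0):
    rng = random.Random(seed)
    best_cfg, best_err = None, float("inf")
    for _ in range(n_trials):
        # Sample log-uniformly, as is common for scale-like
        # hyperparameters such as learning rates.
        lr = 10 ** rng.uniform(-4, 0)
        reg = 10 ** rng.uniform(-4, 0)
        err = validation_error(lr, reg)
        if err < best_err:
            best_cfg, best_err = (lr, reg), err
    return best_cfg, best_err

cfg, err = random_search(200)
```

More sophisticated methods such as Bayesian optimization replace the uniform sampling with a surrogate model that proposes promising configurations, and Hyperband adds early stopping of poor trials.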
Deep learning-based models, such as recurrent neural networks (RNNs), have been applied to various sequence learning tasks with great success. Following this, these models increasingly replace classic approaches for motion prediction in object tracking applications. On the one hand, these models can capture complex object dynamics with less modeling effort required, but on the other hand, they depend on a large amount of training data for parameter tuning. Towards this end, we present an approach for generating synthetic trajectory data of unmanned aerial vehicles (UAVs) in image space. Since UAVs, or rather quadrotors, are dynamic systems, they cannot follow arbitrary trajectories. With the prerequisite that UAV trajectories fulfill a smoothness criterion corresponding to a minimal change of higher-order motion, methods for planning aggressive quadrotor flights can be exploited to generate optimal trajectories through a sequence of 3D waypoints. By projecting these maneuver trajectories, which are suitable for controlling quadrotors, into image space, a versatile trajectory dataset is realized. To demonstrate the applicability of the synthetic trajectory data, we show that an RNN-based prediction model trained solely on the generated data can outperform classic reference models on a real-world UAV tracking dataset. The evaluation is done on the publicly available Anti-UAV dataset.
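The final step of the pipeline, projecting a 3D maneuver trajectory into image space, can be sketched with a basic pinhole camera model. This is a toy illustration under assumed camera intrinsics (`f`, `cx`, `cy`) and an assumed circular flight path, not the paper's planner or camera setup.

```python
import math

def project_to_image(points_3d, f=1000.0, cx=640.0, cy=360.0):
    """Project 3D camera-frame points (x right, y down, z forward)
    onto the image plane with a simple pinhole model."""
    pixels = []
    for x, y, z in points_3d:
        if z <= 0:
            continue  # point lies behind the camera
        u = f * x / z + cx
        v = f * y / z + cy
        pixels.append((u, v))
    return pixels

# Smooth toy 3D flight path (circular arc slowly receding from the
# camera), standing in for a planned minimum-change maneuver trajectory.
traj_3d = [(math.cos(t), math.sin(t), 5.0 + 0.1 * t)
           for t in (i * 0.1 for i in range(50))]
traj_2d = project_to_image(traj_3d)
```

Because smoothness in 3D carries over to the projected curve, the resulting image-space tracks remain physically plausible training data for the prediction model.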
In applications such as object tracking, time-series data inevitably carry missing observations. Following the success of deep learning-based models for various sequence learning tasks, these models increasingly replace classic approaches in object tracking applications for inferring the objects' motion states. While traditional tracking approaches can deal with missing observations, most of their deep counterparts are by default not suited for this. Towards this end, this paper introduces a transformer-based approach for handling missing observations in variable-input-length trajectory data. The model is formed indirectly by successively increasing the complexity of the demanded inference tasks. Starting from reproducing noise-free trajectories, the model then learns to infer trajectories from noisy inputs. By providing missing tokens, binary-encoded missing events, the model learns to cope with missing data and infers a complete trajectory conditioned on the remaining inputs. In the case of a sequence of successive missing events, the model then acts as a pure prediction model. The abilities of the approach are demonstrated on synthetic data and real-world data reflecting prototypical object tracking scenarios.
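The input encoding can be illustrated with a small sketch: each timestep carries the observation plus a binary flag marking missing events, with a placeholder filling the missing positions. The function name, placeholder value, and tuple layout below are assumptions for illustration, not the paper's exact encoding.

```python
def encode_with_missing_tokens(trajectory, placeholder=0.0):
    """Turn a trajectory with gaps (None entries) into fixed-width
    feature tuples (x, y, missing_flag) for a sequence model."""
    encoded = []
    for obs in trajectory:
        if obs is None:  # missing event: placeholder + flag set
            encoded.append((placeholder, placeholder, 1.0))
        else:
            x, y = obs
            encoded.append((x, y, 0.0))
    return encoded

track = [(0.0, 0.0), (1.0, 0.5), None, None, (4.0, 2.0)]
features = encode_with_missing_tokens(track)
```

The binary flag lets the model distinguish a genuinely observed zero from a placeholder, which is what allows it to condition only on the remaining valid inputs.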
In tasks such as tracking, time-series data inevitably carry missing observations. While traditional tracking approaches can deal with missing observations, recurrent neural networks (RNNs) are designed to receive input data in every step. Furthermore, current solutions for RNNs, such as omitting the missing data or data imputation, are not sufficient to account for the resulting increased uncertainty. Towards this end, this paper introduces an RNN-based approach that provides a full temporal filtering cycle for motion state estimation. The Kalman filter-inspired approach can deal with missing observations and outliers. To provide a full temporal filtering cycle, a basic RNN is extended to take observations and the associated belief about their accuracy into account for updating the current state. An RNN prediction model, which generates a parametrized distribution to capture the predicted states, is combined with an RNN update model, which relies on the prediction model output and the current observation. By providing the model with masking information, binary-encoded missing events, the model can overcome limitations of standard techniques for dealing with missing input values. The model's abilities are demonstrated on synthetic data reflecting prototypical pedestrian tracking scenarios.
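The classic filtering cycle the approach emulates can be made concrete with a minimal 1D Kalman filter: a predict step that grows the state uncertainty every timestep, and an update step that is simply skipped when the observation is missing. This is a textbook constant-position sketch with assumed noise values, not the paper's learned model.

```python
def kalman_1d(observations, q=0.01, r=0.25):
    """Minimal 1D constant-position Kalman filter. A missing
    observation (None) triggers a prediction-only step, so
    uncertainty keeps growing until the next valid measurement."""
    x, p = 0.0, 1.0  # state estimate and its variance
    estimates = []
    for z in observations:
        p = p + q            # predict: process noise inflates variance
        if z is not None:    # update only when an observation arrives
            k = p / (p + r)  # Kalman gain
            x = x + k * (z - x)
            p = (1.0 - k) * p
        estimates.append((x, p))
    return estimates

est = kalman_1d([1.0, 1.1, None, 0.9, None, None, 1.0])
```

The RNN approach described above replaces the fixed linear predict/update equations with learned models while keeping this overall cycle, which is why the masking information is essential.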
Diversity Searcher is a tool originally developed to help analyse diversity in news media texts. It relies on a form of automated content analysis and thus rests on prior assumptions and depends on certain design choices related to diversity and fairness. One such design choice is the external knowledge source(s) used. In this article, we discuss implications that these sources can have on the results of content analysis. We compare two data sources that Diversity Searcher has worked with - DBpedia and Wikidata - with respect to their ontological coverage and diversity, and describe implications for the resulting analyses of text corpora. We describe a case study of the relative over- or under-representation of Belgian political parties between 1990 and 2020 in the English-language DBpedia, the Dutch-language DBpedia, and Wikidata, and highlight the many decisions needed with regard to the design of this data analysis and the assumptions behind it, as well as implications from the results. In particular, we came across a staggering over-representation of the political right in the English-language DBpedia.
Artificial intelligence (AI) systems based on deep neural networks (DNNs) and machine learning (ML) algorithms are increasingly used to solve critical problems in bioinformatics, biomedical informatics, and precision medicine. However, complex DNN or ML models, which are unavoidably opaque and perceived as black-box methods, may not be able to explain why and how they make certain decisions. Such black-box models are difficult to comprehend not only for targeted users and decision-makers but also for AI developers. Besides, in sensitive areas like healthcare, explainability and accountability are not only desirable properties of AI but also legal requirements -- especially when AI may have significant impacts on human lives. Explainable artificial intelligence (XAI) is an emerging field that aims to mitigate the opaqueness of black-box models and make it possible to interpret how AI systems make their decisions with transparency. An interpretable ML model can explain how it makes predictions and which factors affect the model's outcomes. The majority of state-of-the-art interpretable ML methods have been developed in a domain-agnostic way and originate from computer vision, automated reasoning, or even statistics. Many of these methods cannot be directly applied to bioinformatics problems without prior customization, extension, and domain adaptation. In this paper, we discuss the importance of explainability with a focus on bioinformatics. We analyse and provide a comprehensive overview of model-specific and model-agnostic interpretable ML methods and tools. Via several case studies covering bioimaging, cancer genomics, and biomedical text mining, we show how bioinformatics research could benefit from XAI methods and how they could help improve decision fairness.
Kernel machines have sustained continuous progress in the field of quantum chemistry. In particular, they have proven to be successful in the low-data regime of force field reconstruction. This is because many physical invariances and symmetries can be incorporated into the kernel function to compensate for much larger datasets. So far, the scalability of this approach has, however, been hindered by its cubic runtime in the number of training points. While it is known that iterative Krylov subspace solvers can overcome these burdens, they crucially rely on effective preconditioners, which are elusive in practice. Practical preconditioners need to be computationally efficient and numerically robust at the same time. Here, we consider the broad class of Nystr\"om-type methods to construct preconditioners based on successively more sophisticated low-rank approximations of the original kernel matrix, each of which provides a different set of computational trade-offs. All considered methods estimate the relevant subspace spanned by the kernel matrix columns using different strategies to identify a representative set of inducing points. Our comprehensive study covers the full spectrum of approaches, starting from naive random sampling to leverage score estimates and incomplete Cholesky factorizations, up to exact SVD decompositions.
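The simplest end of that spectrum, a Nyström approximation with uniformly sampled inducing points, can be sketched in a few lines. This is a generic illustration of the technique on a toy RBF kernel, with assumed kernel parameters; the paper's preconditioners and selection strategies are more sophisticated.

```python
import numpy as np

def nystrom_approximation(K, m, seed=0):
    """Rank-m Nystrom approximation K ~ C @ pinv(W) @ C.T, where the
    m inducing points are chosen by naive uniform sampling (leverage
    scores or pivoted Cholesky would pick them more carefully)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(K.shape[0], size=m, replace=False)
    C = K[:, idx]               # n x m sampled columns
    W = K[np.ix_(idx, idx)]     # m x m core matrix
    return C @ np.linalg.pinv(W) @ C.T

# Toy RBF kernel matrix on 50 evenly spaced 1D points.
x = np.linspace(0.0, 1.0, 50)
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.1)
K_approx = nystrom_approximation(K, m=10)
err = np.linalg.norm(K - K_approx) / np.linalg.norm(K)
```

Used as a preconditioner, such a low-rank surrogate is cheap to invert via the Woodbury identity, which is what makes the Krylov iterations converge in far fewer steps.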
We present an automatic method for annotating images of indoor scenes with the CAD models of the objects by relying on RGB-D scans. Through a visual evaluation by 3D experts, we show that our method retrieves annotations that are at least as accurate as manual annotations, and can thus be used as ground truth without the burden of manually annotating 3D data. We do this using an analysis-by-synthesis approach, which compares renderings of the CAD models with the captured scene. We introduce a 'cloning procedure' that identifies objects that have the same geometry, to annotate these objects with the same CAD models. This allows us to obtain complete annotations for the ScanNet dataset and the recent ARKitScenes dataset.